Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path of a U-Net style network with a parameter-free, model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, Fourier-Net learns a low-dimensional representation of the field in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, compared to a recent transformer-based method, TransMorph, our Fourier-Net, using only 0.22\% of its parameters and 6.66\% of its mult-adds, achieves a 0.6\% higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.
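The model-driven decoder described above is simple enough to sketch: zero-pad the band-limited Fourier coefficients out to the full spectrum, then apply an inverse DFT. The 2D NumPy sketch below illustrates only the idea (the paper works with learned encoders and 3D volumetric fields); the function name, the centred spectrum layout, and the amplitude-rescaling convention are our assumptions, not the authors' code.

```python
import numpy as np

def decode_displacement(band_limited, full_shape):
    """Model-driven decoder sketch: zero-pad centred band-limited
    Fourier coefficients to the full resolution, then invert the DFT."""
    padded = np.zeros(full_shape, dtype=complex)
    h, w = band_limited.shape
    H, W = full_shape
    top, left = (H - h) // 2, (W - w) // 2
    # Place the low-frequency patch at the centre of the full spectrum
    padded[top:top + h, left:left + w] = band_limited
    # Undo the centring, invert; .real discards numerical noise
    field = np.fft.ifft2(np.fft.ifftshift(padded)).real
    # Rescale so amplitudes match the low-resolution convention
    return field * (H * W) / (h * w)

# Toy example: an 8x8 band-limited spectrum decoded to 32x32
coeffs = np.fft.fftshift(np.fft.fft2(np.random.rand(8, 8)))
u = decode_displacement(coeffs, (32, 32))
assert u.shape == (32, 32)
```

With this rescaling the decoded field preserves the mean (DC component) of the low-resolution signal, which is one natural convention for upsampling a displacement field in the Fourier domain.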
Localizing anatomical landmarks is an important task in medical image analysis. However, the landmarks to be localized often lack prominent visual features. Their locations are elusive and easily confused with the background, so precise localization depends highly on the context formed by their surrounding areas. In addition, the required precision is usually higher than in segmentation and object detection tasks. Localization therefore poses challenges distinct from segmentation or detection. In this paper, we propose a zoom-in attentive network (ZIAN) for anatomical landmark localization in ocular images. First, a coarse-to-fine, or "zoom-in", strategy is used to learn contextualized features at different scales. Then, an attentive fusion module is adopted to aggregate the multi-scale features, consisting of 1) a co-attention network with a multiple regions-of-interest (ROIs) scheme that learns complementary features from the multiple ROIs, and 2) an attention-based fusion module that integrates the multi-ROI features and non-ROI features. We evaluated ZIAN on two open challenge tasks, i.e., fovea localization in fundus images and scleral spur localization in AS-OCT images. Experiments show that ZIAN achieves promising performance and outperforms state-of-the-art localization methods. The source code and trained models of ZIAN are available at https://github.com/leixiaofeng-astar/OMIA9-ZIAN.
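The coarse-to-fine strategy can be illustrated independently of the network details. The sketch below, with hypothetical stand-in functions for the two stages, shows only the "zoom-in" mechanics: estimate coarsely, crop an ROI around the estimate, refine inside it, and map the result back to full-image coordinates. It is not ZIAN's implementation.

```python
import numpy as np

def zoom_in_localize(image, coarse_fn, fine_fn, roi=64):
    """Coarse-to-fine ('zoom-in') localization sketch.
    `coarse_fn`/`fine_fn` stand in for the two network stages: each
    maps an image to a (y, x) estimate in its own coordinate frame."""
    # Stage 1: coarse estimate on the full image
    cy, cx = coarse_fn(image)
    # Crop an ROI centred on the coarse estimate (clamped to bounds)
    h, w = image.shape[:2]
    top = int(np.clip(cy - roi // 2, 0, h - roi))
    left = int(np.clip(cx - roi // 2, 0, w - roi))
    patch = image[top:top + roi, left:left + roi]
    # Stage 2: refine inside the zoomed-in patch
    ry, rx = fine_fn(patch)
    # Map the refined estimate back to full-image coordinates
    return top + ry, left + rx
```

With an argmax-style detector plugged into both stages, the refined estimate lands back on the true full-image coordinates; in the real network each stage would be a learned heatmap regressor.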
Disruption of circadian rhythms is a cardinal symptom in patients with Alzheimer's disease (AD). The complete circadian orchestration of gene expression in the human brain and its inherent association with AD remain largely unknown. We propose a novel comprehensive approach, PRIME, to detect and analyze rhythmic oscillation patterns in untimed, high-dimensional gene expression data across multiple datasets. To demonstrate the utility of PRIME, we first validate it on a time-course expression dataset from mouse liver as a cross-species and cross-organ validation. We then apply it to study oscillation patterns in unordered genome-wide gene expression from 19 human brain regions of control subjects and AD patients. Our findings reveal clear, synchronized oscillation patterns in 15 pairs of brain regions of controls, while these oscillation patterns either disappear or are dimmed in AD patients. Notably, PRIME discovers the circadian rhythmic patterns without requiring the timestamps of samples. The code for PRIME, along with the code to reproduce the figures in this paper, is available at https://github.com/xinxingwu-uk/prime.
Generative models have been widely proposed in image recognition to generate more images whose distribution is similar to that of real images. Such models typically introduce a discriminator network responsible for distinguishing real data from generated data, i.e., for telling style-transferred data apart from the data contained in the target dataset. However, such a network focuses on differences in intensity distribution and may ignore structural differences between the datasets. In this paper, we formulate a new image-to-image translation problem that ensures the structure of the generated images is similar to that of the images in the target dataset. We propose a simple yet powerful structure-unbiased adversarial (SUA) network, which accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transformation block followed by an intensity-distribution rendering module. The spatial transformation block is proposed to reduce the structural gap between the two images, and also produces an inverse deformation field to warp the final segmented image back. The intensity-distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method is able to transfer both the intensity distribution and the structural content between multiple datasets.
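To make the warping step concrete, here is a minimal nearest-neighbour warp of an image by a dense displacement field: the operation that both the spatial transformation block and the inverse-field warp-back of the segmentation rely on. This is an illustrative NumPy sketch under our own conventions, not the SUA implementation.

```python
import numpy as np

def warp(image, flow):
    """Warp `image` (H, W) with a dense displacement field `flow`
    of shape (2, H, W): output[y, x] = image[y + flow[0,y,x],
    x + flow[1,y,x]], with nearest-neighbour sampling and
    border clamping."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys + flow[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + flow[1]).astype(int), 0, w - 1)
    return image[sy, sx]
```

For an invertible deformation, warping with the field and then with its (approximate) inverse recovers the original image away from the borders, which is exactly what lets a segmentation computed on the deformed image be warped back.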
With the rapid development of artificial intelligence (AI) in medical image processing, deep learning in color fundus photography (CFP) analysis is also evolving. Although some open-source, labeled CFP datasets exist in the ophthalmology community, large-scale screening datasets only have labels of disease categories, and datasets with annotations of fundus structures are usually small. In addition, labeling standards are not uniform across datasets, and there is no clear information on the acquisition device. Here we release a multi-annotation, multi-quality, and multi-device color fundus image dataset for glaucoma analysis, built for an original challenge -- the Retinal Fundus Glaucoma Challenge 2nd Edition (REFUGE2). The REFUGE2 dataset contains 2000 color fundus images with annotations for glaucoma classification, optic disc/cup segmentation, and fovea localization. Meanwhile, the REFUGE2 challenge sets three sub-tasks of automatic glaucoma diagnosis and fundus structure analysis and provides an online evaluation framework. Given the characteristics of multi-device and multi-quality data, several methods with strong generalization ability were contributed in the challenge to make predictions more robust. This shows that REFUGE2 draws attention to the characteristics of real-world multi-domain data, bridging the gap between scientific research and clinical application.
Graph neural networks have been used for a variety of learning tasks, such as link prediction, node classification, and node clustering. Among them, link prediction is a relatively under-studied graph learning task whose current state-of-the-art models are based on shallow graph autoencoder (GAE) architectures of one or two layers. In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs, and on creating effective methods to deepen (variational) GAE architectures while achieving stable and competitive performance. Our proposed methods innovatively incorporate standard autoencoders (AEs) into the architecture of GAEs: the standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating the adjacency information and node features, while the GAEs further build multi-scale low-dimensional representations via residual connections to learn a compact overall embedding for link prediction. Empirically, extensive experiments on various benchmark datasets verify the effectiveness of our methods and demonstrate the competitive performance of our deepened graph models for link prediction. Theoretically, we prove that our deep extensions inclusively express multiple polynomial filters with different orders.
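A minimal sketch of the idea of deepening a GAE with residual connections: stacked GCN-style propagation layers whose outputs are accumulated through skip connections (so depth contributes higher-order polynomial filters of the normalized adjacency), followed by the standard inner-product decoder. The plain-NumPy forward pass and function names are our assumptions for illustration; the paper's actual architecture additionally integrates standard AEs.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalised adjacency with self-loops:
    A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])
    D = np.diag(1.0 / np.sqrt(A.sum(1)))
    return D @ A @ D

def deep_gae_encode(A, X, weights):
    """Deepened GAE encoder sketch: each layer propagates features
    over A_hat; layer outputs are summed via residual connections
    into one compact embedding (all hidden sizes assumed equal)."""
    A_hat = normalize_adj(A)
    H = X
    Z = np.zeros((X.shape[0], weights[0].shape[1]))
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0)  # GCN-style layer + ReLU
        Z = Z + H                          # residual accumulation
    return Z

def decode_links(Z):
    """Inner-product decoder: sigmoid(Z Z^T) as link probabilities."""
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))
```

The decoded matrix is symmetric with entries in (0, 1), so thresholding it directly yields predicted links.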
Feature selection reduces the dimensionality of data by identifying a subset of the most informative features. In this paper, we propose an innovative framework for unsupervised feature selection, called fractal autoencoders (FAE). It trains a neural network to pinpoint informative features for global exploration of representability and local excavation of diversity. Architecturally, FAE extends autoencoders by adding a one-to-one scoring layer and a small sub-neural network to perform feature selection in an unsupervised fashion. With such a concise architecture, FAE achieves state-of-the-art performance; extensive experimental results on fourteen datasets, including very-high-dimensional data, have demonstrated the superiority of FAE over existing contemporary methods for unsupervised feature selection. In particular, FAE exhibits substantial advantages on gene expression data exploration, reducing the measurement cost by about 15% via the widely used L1000 landmark genes. Furthermore, we show that the FAE framework is easily extensible with applications.
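The one-to-one scoring layer admits a very small sketch: one non-negative weight per feature gates the input elementwise, and the top-k scores pick the selected subset. The helper below is a hypothetical illustration of that selection mechanics only, not the FAE training procedure (which learns the scores through reconstruction with the sub-network).

```python
import numpy as np

def score_and_select(X, scores, k):
    """One-to-one scoring-layer sketch: each of the d features of
    X (n, d) is multiplied by its own score (an elementwise gate);
    the indices of the k largest scores are the selected features."""
    gated = X * scores                      # broadcast: one weight per feature
    top_k = np.argsort(scores)[::-1][:k]    # k highest-scoring features
    return gated, np.sort(top_k)
```

In FAE the gated output feeds the reconstruction sub-network, so the scores are pushed toward a sparse pattern that keeps only the features needed to reconstruct the rest.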
Feature selection, as an important dimensionality reduction technique, reduces the data dimension by identifying an essential subset of the input features, which can facilitate interpretable insights into the learning and inference processes. Algorithmic stability is a key characteristic of an algorithm regarding its sensitivity to perturbations of the input samples. In this paper, we propose an innovative unsupervised feature selection algorithm attaining this stability with provable guarantees. The architecture of our algorithm consists of a feature scorer and a feature selector. The scorer trains a neural network (NN) to globally score all the features, and the selector adopts a dependent sub-NN to locally evaluate the representation abilities of the selected features. Further, we present an algorithmic stability analysis and show that our algorithm has a performance guarantee via a generalization error bound. Extensive experimental results on real-world datasets demonstrate the superior generalization performance of our proposed algorithm over strong baseline methods. Also, the properties revealed by our theoretical analysis and the stability of our algorithm-selected features are empirically confirmed.
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric, PartPQ, is biased toward PQ. To handle both issues, we make the following contributions: Firstly, we design a meta-architecture that decouples the part features from the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem; we term this model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), which better measures the task from both pixel-region and part-whole perspectives and can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole cross-attention scheme, implemented via masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost reduction of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research on PPS. Code will be available.
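Masked cross attention, as popularized by Mask2Former, restricts each query to attend only within its predicted foreground mask. The NumPy sketch below shows the core computation (with the usual fallback to full attention for queries whose mask is empty); it illustrates the general mechanism, not Panoptic-PartFormer++'s exact part-whole scheme.

```python
import numpy as np

def masked_cross_attention(Q, K, V, mask):
    """Masked cross-attention sketch. Q: (q, d) queries; K, V: (n, d)
    pixel features; mask: (q, n) boolean, True where a query may
    attend. Masked-out logits are set to -inf before the softmax."""
    # Fall back to full attention for queries with an empty mask
    mask = np.where(mask.any(axis=1, keepdims=True), mask, True)
    scores = Q @ K.T / np.sqrt(Q.shape[1])          # scaled dot-product
    scores = np.where(mask, scores, -np.inf)        # restrict attention
    scores = scores - scores.max(axis=1, keepdims=True)  # stable softmax
    w = np.exp(scores)
    return (w / w.sum(axis=1, keepdims=True)) @ V
```

Because masked positions receive zero weight, each output row is a convex combination of only the value vectors inside that query's mask.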
Rankings are widely collected in various real-life scenarios, leading to the leakage of personal information such as users' preferences on videos or news. To protect rankings, existing works mainly develop privacy protection on a single ranking within a set of rankings, or on pairwise comparisons of a ranking, under $\epsilon$-differential privacy. This paper proposes a novel notion called $\epsilon$-ranking differential privacy for protecting ranks. We establish the connection between the Mallows model (Mallows, 1957) and the proposed $\epsilon$-ranking differential privacy. This allows us to develop a multistage ranking algorithm to generate synthetic rankings while satisfying the developed $\epsilon$-ranking differential privacy. Theoretical results regarding the utility of synthetic rankings in downstream tasks, including the inference attack and the personalized ranking task, are established. For the inference attack, we quantify how $\epsilon$ affects the estimation of the true ranking based on synthetic rankings. For the personalized ranking task, we consider varying privacy preferences among users and quantify how their privacy preferences affect the consistency in estimating the optimal ranking function. Extensive numerical experiments are carried out to verify the theoretical results and demonstrate the effectiveness of the proposed synthetic ranking algorithm.
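One standard way to sample from a Mallows model, and a plausible shape for a multistage ranking algorithm, is the repeated-insertion scheme: items of the centre ranking are inserted one stage at a time, with geometrically decaying probabilities over the insertion positions. The sketch below is a generic Mallows sampler under this scheme, not necessarily the paper's exact mechanism; `phi` is the usual dispersion parameter, which a privacy calibration would tie to $\epsilon$.

```python
import numpy as np

def sample_mallows(center, phi, rng):
    """Repeated-insertion sampler for the Mallows model. At stage i,
    item center[i-1] is inserted at position j in {1, ..., i} with
    probability proportional to phi**(i - j). phi in (0, 1]:
    phi -> 0 concentrates on `center`, phi = 1 is uniform."""
    out = []
    for i, item in enumerate(center, start=1):
        # weights for 0-based insertion indices 0..i-1 (positions 1..i)
        p = phi ** (i - np.arange(1, i + 1))
        j = rng.choice(i, p=p / p.sum())
        out.insert(j, item)
    return out
```

As phi shrinks the sampled rankings collapse onto the centre ranking, and as phi grows toward 1 they spread toward uniform: exactly the utility/privacy dial a synthetic-ranking mechanism needs.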